Network with Sub-networks: Layer-wise Detachable Neural Network
Authors
Abstract
Similar Resources
Flexible Network Binarization with Layer-wise Priority
How to effectively approximate real-valued parameters with binary codes plays a central role in neural network binarization. In this work, we reveal an important fact: binarizing different layers has a widely varying effect on the network's compression ratio and loss of performance. Based on this fact, we propose a novel and flexible neural network binarization method by introducing the...
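The snippet only hints at the method, so here is a minimal sketch of what layer-wise selective binarization can look like: standard sign-based binarization with a per-layer scale (the w ≈ α·sign(w) approximation), applied only to layers chosen by some priority. The function names and the idea of exempting "high-priority" layers are assumptions for illustration, not the scheme proposed in the cited paper.

import numpy as np

def binarize_weights(w):
    # Sign-based binarization with a per-layer scale alpha = mean(|w|),
    # i.e. the common approximation w ~ alpha * sign(w).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def binarize_by_priority(layer_weights, keep_real):
    # Binarize every layer except those whose names appear in keep_real,
    # mimicking a layer-wise decision about which layers to compress.
    return {name: (w if name in keep_real else binarize_weights(w))
            for name, w in layer_weights.items()}

# Toy example: keep the first and last layers in full precision.
rng = np.random.default_rng(0)
weights = {"fc1": rng.normal(size=(4, 8)),
           "fc2": rng.normal(size=(8, 8)),
           "fc3": rng.normal(size=(8, 2))}
compressed = binarize_by_priority(weights, keep_real={"fc1", "fc3"})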
Layer-wise Relevance Propagation for Deep Neural Network Architectures
We present the application of layer-wise relevance propagation to several deep neural networks, such as the BVLC reference net and GoogLeNet trained on the ImageNet and MIT Places datasets. Layer-wise relevance propagation is a method to compute scores for image pixels and image regions denoting the impact of a particular image region on the classifier's prediction for one particular te...
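As a concrete illustration of the propagation step, the sketch below implements the widely used epsilon rule of layer-wise relevance propagation for a single dense layer. The variable names are assumptions for this example; the code is not taken from the networks mentioned in the abstract.

import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    # Epsilon-rule LRP for one dense layer with pre-activations z = a @ W + b.
    # a: (n_in,) input activations, R_out: (n_out,) relevance from the layer above.
    z = a @ W + b
    z = z + eps * np.sign(z)        # stabilizer prevents division by near-zero
    s = R_out / z                   # relevance per unit of pre-activation
    return a * (W @ s)              # redistribute to inputs in proportion to a_j * w_jk

Relevance is approximately conserved: the returned vector sums to roughly R_out.sum() when the bias is zero and eps is small, which is the property that lets the output score be traced back to individual pixels.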
Point-wise Convolutional Neural Network
Deep learning with 3D data such as reconstructed point clouds and CAD models has received great research interest recently. However, the capability of using point clouds with convolutional neural networks has so far not been fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network i...
Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck
In this paper, we present a layer-wise learning method for stochastic neural networks (SNNs) from an information-theoretic perspective. In each layer of an SNN, the compression and the relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SN...
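In the usual information-bottleneck formulation that this abstract paraphrases, each layer's objective trades the relevance term against the compression term. A standard way to write it (the trade-off coefficient beta is assumed here, it is not given in the snippet) is

\max_{\theta_l} \; I(Z_l; Y) \;-\; \beta \, I(Z_l; X)

where Z_l is the stochastic representation produced by layer l, I(Z_l; X) quantifies compression (information retained about the input) and I(Z_l; Y) quantifies relevance (information retained about the target).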
Train Feedfoward Neural Network with Layer-wise Adaptive Rate via Approximating Back-matching Propagation
Stochastic gradient descent (SGD) has achieved great success in training deep neural networks, where the gradient is computed through back-propagation. However, the back-propagated values of different layers vary dramatically. This inconsistency of gradient magnitude across layers makes optimizing a deep neural network with a single learning rate problematic. We introduce the back-...
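A simple way to picture a layer-wise learning rate is the LARS-style trust ratio, sketched below. This is a generic illustration of per-layer rate scaling, not the back-matching rule introduced in the cited paper.

import numpy as np

def layerwise_sgd_step(weights, grads, base_lr=0.1, eps=1e-8):
    # One SGD step where each layer's effective rate is scaled by a local
    # trust ratio ||w|| / ||g|| (LARS-style), so layers with tiny gradients
    # are not starved and layers with huge gradients are not destabilized.
    updated = {}
    for name, w in weights.items():
        g = grads[name]
        ratio = np.linalg.norm(w) / (np.linalg.norm(g) + eps)
        updated[name] = w - base_lr * ratio * g
    return updated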
Journal
Journal title: Journal of Robotics, Networking and Artificial Life
Year: 2020
ISSN: 2352-6386
DOI: 10.2991/jrnal.k.201215.006